
    Edge-Caching Wireless Networks: Performance Analysis and Optimization

    Edge-caching has received much attention as an efficient technique to reduce delivery latency and network congestion during peak-traffic times by bringing data closer to end users. Existing works usually design caching algorithms separately from the physical layer. In this paper, we analyse edge-caching wireless networks by taking the caching capability into account when designing the signal transmission. In particular, we investigate multi-layer caching, where both the base station (BS) and the users can store content in their local caches, and analyse the performance of edge-caching wireless networks under two notable caching strategies: uncoded and coded caching. Firstly, we propose a coded caching strategy that applies to arbitrary cache sizes. The required backhaul and access rates are derived as functions of the BS and user cache sizes. Secondly, closed-form expressions for the system energy efficiency (EE) under the two caching methods are derived. Based on the derived formulas, the system EE is maximized by designing and optimizing the precoding vectors while satisfying a predefined user request rate. Thirdly, two optimization problems are formulated to minimize the content delivery time under the two caching strategies. Finally, numerical results are presented to verify the effectiveness of the two caching methods.
    Comment: to appear in IEEE Trans. Wireless Commun.
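
    To make the caching trade-off concrete, the sketch below contrasts the backhaul load of uncoded and coded caching using the classic Maddah-Ali–Niesen rate expression as a stand-in; the paper's multi-layer (BS plus user cache) formulas are not reproduced here, so treat these functions as illustrative assumptions only.

```python
# Illustrative comparison of uncoded vs. coded caching backhaul load,
# based on the classic Maddah-Ali-Niesen rate expression (a stand-in
# for the paper's multi-layer formulas, which are not reproduced here).

def uncoded_backhaul_rate(K: int, M: float, N: int) -> float:
    """Uncoded caching: each of K users still needs the (1 - M/N)
    fraction of its requested file that is not in its cache."""
    return K * (1.0 - M / N)

def coded_backhaul_rate(K: int, M: float, N: int) -> float:
    """Coded caching: multicast coding adds a global gain of
    1 / (1 + K*M/N) on top of the local caching gain."""
    return K * (1.0 - M / N) / (1.0 + K * M / N)

if __name__ == "__main__":
    K, N = 10, 100             # 10 users, library of 100 files
    for M in (0, 10, 25, 50):  # user cache size in files
        print(f"M={M:3d}: uncoded={uncoded_backhaul_rate(K, M, N):5.2f}, "
              f"coded={coded_backhaul_rate(K, M, N):5.2f}")
```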

    Task-Based Information Compression for Multi-Agent Communication Problems with Channel Rate Constraints

    A collaborative task is assigned to a multi-agent system (MAS) in which agents are allowed to communicate. The MAS runs over an underlying Markov decision process, and its task is to maximize the averaged sum of discounted one-stage rewards. Although knowing the global state of the environment is necessary for optimal action selection, agents are limited to individual observations. Inter-agent communication can tackle the issue of local observability; however, its limited rate prevents the agents from acquiring precise global state information. To overcome this challenge, agents need to communicate their observations in a compact way such that the resulting loss in the sum of rewards is minimized. We show that this problem is equivalent to a form of rate-distortion problem, which we call task-based information compression. We introduce a scheme for task-based information compression called state aggregation for information compression (SAIC), for which a state aggregation algorithm is analytically designed. SAIC is shown to achieve near-optimal performance in terms of the sum of discounted rewards. The proposed algorithm is applied to a rendezvous problem and its performance is compared with several benchmarks. Numerical experiments confirm the superiority of the proposed algorithm.
    Comment: 13 pages, 9 figures
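
    As a toy illustration of the task-based compression idea, the sketch below merges states with similar task value into a shared label that fits an R-bit channel. The value-quantization rule and dimensions are assumptions made for illustration; they are not the paper's analytically designed SAIC aggregation.

```python
import numpy as np

# Toy sketch of task-based state aggregation: states with similar task
# value share one label, and the label alphabet fits an R-bit channel.
# The quantile-based rule below is an illustrative assumption, not SAIC.

def aggregate_states(values: np.ndarray, rate_bits: int) -> np.ndarray:
    """Map each state to one of 2**rate_bits labels by quantizing its
    (pre-computed) task value, merging 'similar-value' states."""
    n_labels = 2 ** rate_bits
    # Quantile-based bin edges keep the labels roughly balanced.
    edges = np.quantile(values, np.linspace(0, 1, n_labels + 1)[1:-1])
    return np.digitize(values, edges)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state_values = rng.normal(size=64)   # e.g. V(s) for 64 local states
    labels = aggregate_states(state_values, rate_bits=2)  # 2-bit channel
    print(labels)  # each state is now reported with one of 4 symbols
```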

    UAV Relay-Assisted Emergency Communications in IoT Networks: Resource Allocation and Trajectory Optimization

    In this paper, an unmanned aerial vehicle (UAV) is deployed as a flying base station to collect data from time-constrained IoT devices and then transfer the data to a ground gateway (GW). In general, the latency constraints of IoT users and the limited storage capacity of the UAV greatly hinder practical applications of UAV-assisted IoT networks. In this paper, the full-duplex (FD) technique is adopted at the UAV to overcome these challenges. In addition, the half-duplex (HD) scheme for UAV-based relaying is also considered to provide a comparative study between the two modes. In this context, we aim at maximizing the number of served IoT devices by jointly optimizing the bandwidth and power allocation as well as the UAV trajectory, while satisfying the requested timeout (RT) requirement of each device and the UAV's limited storage capacity. The formulated optimization problem is difficult to solve due to its non-convexity and combinatorial nature. To obtain a tractable formulation, we first relax the binary variables into continuous values and transform the original problem into a more computationally tractable form. By leveraging the inner approximation framework, we derive new approximations of the non-convex parts and then develop a simple yet efficient iterative algorithm for its solution. Next, we maximize the total throughput while preserving the maximized number of served IoT devices. Finally, numerical results show that the proposed algorithms significantly outperform benchmark approaches in terms of the number of served IoT devices and the amount of collected data.
    Comment: 30 pages, 11 figures
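
    The iterative scheme follows the standard inner-approximation (successive convex approximation) pattern; a minimal skeleton is sketched below on a toy one-dimensional problem, not the paper's joint bandwidth/power/trajectory design. The surrogate here linearizes the concave term of f(x) = x^4 - 3x^2 + x at the current point, giving a convex upper bound that is tight there.

```python
import math

# Skeleton of an inner-approximation (successive convex approximation)
# loop on a toy non-convex problem. For f(x) = x^4 - 3x^2 + x, the
# concave term -3x^2 is replaced by its tangent at the current point,
# yielding a convex upper bound that can be minimized in closed form.

def f(x: float) -> float:
    return x**4 - 3 * x**2 + x

def solve_convex_surrogate(xk: float) -> tuple[float, float]:
    """Minimize the convex upper bound x^4 - 6*xk*x + 3*xk**2 + x."""
    c = (6 * xk - 1) / 4            # stationarity: 4x^3 - 6*xk + 1 = 0
    x_new = math.copysign(abs(c) ** (1 / 3), c)
    return x_new, f(x_new)

def inner_approximation(x0: float, tol: float = 1e-6, max_iter: int = 100):
    """Iterate surrogate solves; the true objective is non-increasing."""
    x, obj = x0, f(x0)
    for _ in range(max_iter):
        x_new, obj_new = solve_convex_surrogate(x)
        if abs(obj - obj_new) <= tol:
            return x_new, obj_new
        x, obj = x_new, obj_new
    return x, obj

print(inner_approximation(x0=0.5))  # converges to a stationary point of f
```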

    Adaptive Compression and Joint Detection for Fronthaul Uplinks in Cloud Radio Access Networks


    Machine Learning-Enabled Joint Antenna Selection and Precoding Design: From Offline Complexity to Online Performance

    We investigate the performance of multi-user multiple-antenna downlink systems in which a base station (BS) serves multiple users via a shared wireless medium. In order to fully exploit the spatial diversity while minimizing the passive energy consumed by radio frequency (RF) components, the BS is equipped with M RF chains and N antennas, where M < N. Upon receiving pilot sequences to obtain the channel state information, the BS determines the best subset of M antennas for serving the users. We propose a joint antenna selection and precoding design (JASPD) algorithm to maximize the system sum rate subject to a transmit power constraint and QoS requirements. The JASPD overcomes the non-convexity of the formulated problem via a doubly iterative algorithm, in which an inner loop successively optimizes the precoding vectors, followed by an outer loop that tries all valid antenna subsets. Although it approaches (near-)global optimality, the JASPD suffers from combinatorial complexity, which may limit its application in real-time network operations. To overcome this limitation, we propose a learning-based antenna selection and precoding design algorithm (L-ASPD), which employs a deep neural network (DNN) to establish underlying relations between the key system parameters and the selected antennas. The proposed L-ASPD is robust against the number of users and their locations, the BS's transmit power, as well as the small-scale channel fading. With a well-trained learning model, the L-ASPD significantly outperforms baseline schemes based on block diagonalization and a learning-assisted solution for broadcasting systems, and achieves a higher effective sum rate than the JASPD under limited processing time. In addition, we observe that the proposed L-ASPD can reduce the computational complexity by 95% while retaining more than 95% of the optimal performance.
    Comment: accepted to the IEEE Transactions on Wireless Communications
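
    A minimal sketch of the learning component is given below: a small feed-forward network scores each of the N antennas from channel-state features, and the top-M scores form the candidate subset, after which the precoders would be optimized as in the JASPD inner loop. The layer sizes, feature vector, and architecture are illustrative assumptions, not the paper's trained model.

```python
import torch
import torch.nn as nn

# Sketch of learning-based antenna selection: a small network scores
# each of the N antennas from channel-state features; the M highest
# scores give the candidate subset. Dimensions are illustrative.

N_ANT, M_RF, FEAT = 16, 4, 64    # antennas, RF chains, feature size

model = nn.Sequential(
    nn.Linear(FEAT, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_ANT),       # one selection score per antenna
)

def select_antennas(features: torch.Tensor) -> torch.Tensor:
    """Return indices of the M antennas with the highest predicted scores."""
    with torch.no_grad():
        scores = model(features)
    return torch.topk(scores, k=M_RF, dim=-1).indices

x = torch.randn(1, FEAT)         # e.g. flattened channel statistics
print(select_antennas(x))        # candidate subset for precoding design
```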

    Adaptive Cloud Radio Access Networks: Compression and Optimization


    Defeating Super-Reactive Jammers With Deception Strategy: Modeling, Signal Detection, and Performance Analysis

    This paper aims to develop a novel framework to defeat a super-reactive jammer, one of the most difficult jamming attacks to deal with in practice. Specifically, the jammer has an unlimited power budget and is equipped with self-interference suppression capability to simultaneously attack and listen to the transmitter's activities. Consequently, dealing with super-reactive jammers is very challenging. Thus, we introduce a smart deception mechanism to attract the jammer to continuously attack the channel and then leverage the jamming signals to transmit data based on ambient backscatter communication, which is resilient to radio interference/jamming. To decode the backscattered signals, the maximum likelihood (ML) detector can be adopted. However, this method is notorious for its high computational complexity and requires a specific mathematical model of the communication system. Hence, we propose a deep learning-based detector that can dynamically adapt to any channel and noise distributions. With a Long Short-Term Memory network, our detector can learn the received signals' dependencies to achieve performance close to that of the optimal ML detector. Through simulation and theoretical results, we demonstrate that with the proposed approaches, the more power the jammer uses to attack the channel, the better the bit error rate performance we can achieve.
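
    The detector idea can be sketched as follows: an LSTM reads a window of received samples and outputs a hard decision for the backscattered bit, standing in for the ML detector without requiring an explicit channel model. The architecture, dimensions, and inputs below are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

# Sketch of an LSTM-based detector for backscattered bits: the network
# reads a window of received samples and classifies the embedded bit,
# learning temporal dependencies instead of assuming a channel model.

class LSTMDetector(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # logits for bit 0 vs. bit 1

    def forward(self, samples: torch.Tensor) -> torch.Tensor:
        # samples: (batch, window_len, 1) received signal samples
        out, _ = self.lstm(samples)
        return self.head(out[:, -1, :])    # decide from the last hidden state

detector = LSTMDetector()
window = torch.randn(8, 50, 1)             # 8 windows of 50 samples each
bits = detector(window).argmax(dim=-1)     # hard bit decisions
print(bits)
```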